Model-Agnostic Meta-Learning (MAML) is one of the most successful meta-learning techniques. It uses gradient descent to learn commonalities among tasks, enabling the model to learn a meta-initialization of its own parameters from which it can quickly adapt to new tasks with only a small amount of labeled training data. A key challenge in few-shot learning is task uncertainty: although a strong prior can be obtained from meta-learning over a large number of tasks, a precise model for a new task cannot be guaranteed because the training set for that task is usually too small. In this work, we first propose a new method for the task-specific learner that adaptively learns to select initialization parameters which minimize the loss on the new task. We then propose two improved methods for the meta-loss: Method 1 generates weights by comparing meta-loss differences to improve accuracy when the number of classes is small, and Method 2 introduces the homoscedastic uncertainty of each task to weight the multiple losses on top of the original gradient descent, enhancing generalization to novel classes while ensuring improved accuracy. Compared with previous gradient-based meta-learning methods, our model performs better on regression tasks and few-shot classification, and is more robust to the learning rate and to the query sets in the meta-test set.
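The uncertainty weighting in Method 2 can be made concrete with a short sketch. The following is a minimal illustration under stated assumptions (toy sinusoid regression, one inner step, a fixed per-slot log-variance parameter), not the paper's implementation: each task's query loss is scaled by exp(-log_sigma) and regularized by log_sigma, the standard homoscedastic-uncertainty weighting, inside an otherwise ordinary MAML outer loop.

```python
# Hedged sketch: MAML outer loop with homoscedastic-uncertainty-weighted meta-loss.
import torch
import torch.nn as nn
from torch.func import functional_call

model = nn.Sequential(nn.Linear(1, 40), nn.ReLU(), nn.Linear(40, 1))
num_tasks, inner_lr = 4, 0.01                         # assumed meta-batch size and inner step size
log_sigma = torch.zeros(num_tasks, requires_grad=True)  # per-task homoscedastic uncertainty (simplification)
meta_opt = torch.optim.Adam(list(model.parameters()) + [log_sigma], lr=1e-3)
loss_fn = nn.MSELoss()

def adapted_params(x_s, y_s):
    """One inner gradient step on the support set (second-order via create_graph)."""
    params = dict(model.named_parameters())
    loss = loss_fn(functional_call(model, params, (x_s,)), y_s)
    grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
    return {k: v - inner_lr * g for (k, v), g in zip(params.items(), grads)}

def meta_step(tasks):
    """tasks: list of (x_support, y_support, x_query, y_query) tuples."""
    meta_loss = 0.0
    for i, (x_s, y_s, x_q, y_q) in enumerate(tasks):
        fast = adapted_params(x_s, y_s)
        task_loss = loss_fn(functional_call(model, fast, (x_q,)), y_q)
        # uncertainty-weighted combination: precision-scaled loss + log-variance regularizer
        meta_loss = meta_loss + torch.exp(-log_sigma[i]) * task_loss + log_sigma[i]
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
    return meta_loss.item()

# toy sinusoid-regression tasks, as in common MAML demos
tasks = []
for _ in range(num_tasks):
    amp, phase = torch.rand(1) * 4 + 1, torch.rand(1) * 3
    x = torch.rand(10, 1) * 10 - 5
    tasks.append((x[:5], amp * torch.sin(x[:5] + phase),
                  x[5:], amp * torch.sin(x[5:] + phase)))
print(meta_step(tasks))
```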
As a proactive network security protection scheme, an intrusion detection system (IDS) bears the important responsibility of detecting network attacks in the form of malicious network traffic, and intrusion detection technology is a core component of an IDS. Many researchers have studied intrusion detection extensively, yet developing efficient detection methods for large-scale network traffic data remains difficult. Since generative adversarial networks (GANs) have strong modeling capabilities for complex, high-dimensional data, they offer a new way to address this problem. In this paper, we propose IDS-EBGAN, an intrusion detection method based on EBGAN that classifies network records as normal or malicious traffic. The generator in IDS-EBGAN converts the original malicious network traffic in the training set into adversarial malicious examples, because we want to use adversarial learning to improve the discriminator's ability to detect malicious traffic. Meanwhile, the discriminator adopts an autoencoder model. At test time, IDS-EBGAN uses the discriminator's reconstruction error to classify traffic records.
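A minimal sketch of the test-time rule described above, under stated assumptions rather than the paper's implementation: an autoencoder discriminator scores each traffic record by its reconstruction error, and records whose error exceeds a threshold are flagged as malicious. The feature dimension and threshold below are placeholders.

```python
# Hedged sketch: autoencoder discriminator classifying traffic by reconstruction error.
import torch
import torch.nn as nn

class AEDiscriminator(nn.Module):
    """Autoencoder over a flat feature vector of a network-flow record."""
    def __init__(self, n_features: int, latent: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                     nn.Linear(64, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                     nn.Linear(64, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def classify(disc: AEDiscriminator, records: torch.Tensor, threshold: float):
    """Return 1 (malicious) where per-record reconstruction error exceeds threshold."""
    with torch.no_grad():
        err = ((disc(records) - records) ** 2).mean(dim=1)
    return (err > threshold).long(), err

# toy usage with random 'flow features'; in practice the threshold would be chosen
# on a validation split (e.g., a high percentile of errors on normal traffic).
disc = AEDiscriminator(n_features=41)        # 41 features is an assumption, not from the paper
records = torch.randn(8, 41)
labels, errors = classify(disc, records, threshold=1.0)
print(labels.tolist())
```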
An oft-cited open problem of federated learning is the existence of data heterogeneity at the clients. One pathway to understanding the drastic accuracy drop in federated learning is by scrutinizing the behavior of the clients' deep models on data with different levels of "difficulty", which has been left unaddressed. In this paper, we investigate a different and rarely studied dimension of FL: ordered learning. Specifically, we aim to investigate how ordered learning principles can contribute to alleviating the heterogeneity effects in FL. We present theoretical analysis and conduct extensive empirical studies on the efficacy of orderings spanning three kinds of learning: curriculum, anti-curriculum, and random curriculum. We find that curriculum learning largely alleviates non-IIDness. Interestingly, the more disparate the data distributions across clients the more they benefit from ordered learning. We provide analysis explaining this phenomenon, specifically indicating how curriculum training appears to make the objective landscape progressively less convex, suggesting fast converging iterations at the beginning of the training procedure. We derive quantitative results of convergence for both convex and nonconvex objectives by modeling the curriculum training on federated devices as local SGD with locally biased stochastic gradients. Also, inspired by ordered learning, we propose a novel client selection technique that benefits from the real-world disparity in the clients. Our proposed approach to client selection has a synergic effect when applied together with ordered learning in FL.
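The ordered-learning idea above can be illustrated with a short, hedged sketch (not the paper's code): each client scores its samples by the current global model's loss as a proxy for "difficulty", then runs local SGD over batches ordered easy-to-hard (curriculum) or hard-to-easy (anti-curriculum). Model, data, and hyperparameters below are toy placeholders.

```python
# Hedged sketch: curriculum-ordered local SGD on one federated client.
import torch
import torch.nn as nn

def curriculum_local_sgd(global_model, x, y, lr=0.05, batch_size=32, anti=False):
    model = nn.Linear(x.shape[1], int(y.max()) + 1)
    model.load_state_dict(global_model.state_dict())      # start from the global model
    loss_fn = nn.CrossEntropyLoss(reduction="none")
    with torch.no_grad():
        difficulty = loss_fn(model(x), y)                  # per-sample loss as difficulty score
    order = torch.argsort(difficulty, descending=anti)     # easy->hard, or hard->easy if anti
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for start in range(0, len(order), batch_size):
        idx = order[start:start + batch_size]
        opt.zero_grad()
        loss_fn(model(x[idx]), y[idx]).mean().backward()
        opt.step()
    return model.state_dict()

# toy usage: one client with synthetic data
g = nn.Linear(10, 3)
x, y = torch.randn(200, 10), torch.randint(0, 3, (200,))
client_update = curriculum_local_sgd(g, x, y)
```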
Large language models can perform new tasks in a zero-shot fashion, given natural language prompts that specify the desired behavior. Such prompts are typically hand engineered, but can also be learned with gradient-based methods from labeled data. However, what factors make prompts effective remains underexplored, especially when the prompts are natural language. In this paper, we investigate common attributes shared by effective prompts. We first propose a human-readable prompt tuning method (FluentPrompt) based on Langevin dynamics that incorporates a fluency constraint to find a diverse distribution of effective and fluent prompts. Our analysis reveals that effective prompts are topically related to the task domain and calibrate the prior probability of label words. Based on these findings, we also propose a method for generating prompts using only unlabeled data, outperforming strong baselines by an average of 7.0% accuracy across three tasks.
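A hedged sketch of the update scheme described above, with stand-in losses rather than the authors' implementation: continuous prompt embeddings are updated by a gradient step on a combined task-plus-fluency objective, with Gaussian noise added as in Langevin dynamics to sample a distribution of prompts rather than a single point.

```python
# Hedged sketch: one Langevin step on prompt embeddings with a fluency penalty.
import torch

def langevin_prompt_step(prompt_emb, task_loss_fn, fluency_loss_fn,
                         lr=1e-2, lam=0.1, noise_scale=1e-3):
    """Gradient step on (task loss + lam * fluency loss), plus Gaussian noise."""
    prompt_emb = prompt_emb.detach().requires_grad_(True)
    loss = task_loss_fn(prompt_emb) + lam * fluency_loss_fn(prompt_emb)
    loss.backward()
    with torch.no_grad():
        noise = torch.randn_like(prompt_emb) * (2 * lr) ** 0.5 * noise_scale
        new_emb = prompt_emb - lr * prompt_emb.grad + noise
    return new_emb, loss.item()

# toy usage with placeholder quadratic losses standing in for the LM-based objectives
emb = torch.randn(5, 768)                                 # 5 prompt-token embeddings (assumed size)
task = lambda e: ((e - 1.0) ** 2).mean()                  # placeholder task loss
fluency = lambda e: (e ** 2).mean()                       # placeholder fluency (LM) loss
for _ in range(10):
    emb, l = langevin_prompt_step(emb, task, fluency)
print(round(l, 4))
```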
We introduce INSTRUCTOR, a new method for computing text embeddings given task instructions: every text input is embedded together with instructions explaining the use case (e.g., task and domain descriptions). Unlike encoders from prior work that are more specialized, INSTRUCTOR is a single embedder that can generate text embeddings tailored to different downstream tasks and domains, without any further training. We first annotate instructions for 330 diverse tasks and train INSTRUCTOR on this multitask mixture with a contrastive loss. We evaluate INSTRUCTOR on 70 embedding evaluation tasks (66 of which are unseen during training), ranging from classification and information retrieval to semantic textual similarity and text generation evaluation. INSTRUCTOR, while having an order of magnitude fewer parameters than the previous best model, achieves state-of-the-art performance, with an average improvement of 3.4% compared to the previous best results on the 70 diverse datasets. Our analysis suggests that INSTRUCTOR is robust to changes in instructions, and that instruction finetuning mitigates the challenge of training a single model on diverse datasets.
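The interface idea is simple to sketch: the task instruction is encoded together with the input text, so the same encoder yields different, task-tailored embeddings. The snippet below is an illustration only; it uses a generic sentence-transformers encoder as a stand-in rather than the INSTRUCTOR model itself, and the instruction strings are examples of the format, not exact prompts from the paper.

```python
# Hedged sketch: instruction-conditioned embeddings with a stand-in encoder.
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # stand-in encoder, not INSTRUCTOR itself

def embed(instruction: str, text: str):
    # The instruction is embedded together with the text, steering the
    # representation toward the intended downstream use case.
    return encoder.encode(instruction + " " + text, normalize_embeddings=True)

retrieval_vec = embed("Represent the Wikipedia document for retrieval:",
                      "Paris is the capital of France.")
cluster_vec = embed("Represent the sentence for clustering:",
                    "Paris is the capital of France.")
print(retrieval_vec.shape)
```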
Weakly supervised machine learning algorithms are able to learn from ambiguous samples or labels, e.g., multi-instance learning or partial-label learning. However, in some real-world tasks, each training sample is associated with not only multiple instances but also a candidate label set that contains one ground-truth label and some false positive labels. Specifically, at least one instance pertains to the ground-truth label while no instance belongs to the false positive labels. In this paper, we formalize such problems as multi-instance partial-label learning (MIPL). Existing multi-instance learning algorithms and partial-label learning algorithms are suboptimal for solving MIPL problems since the former fail to disambiguate a candidate label set, and the latter cannot handle a multi-instance bag. To address these issues, a tailored algorithm named MIPLGP, i.e., Multi-Instance Partial-Label learning with Gaussian Processes, is proposed. MIPLGP first assigns each instance with a candidate label set in an augmented label space, then transforms the candidate label set into a logarithmic space to yield the disambiguated and continuous labels via an exclusive disambiguation strategy, and last induces a model based on the Gaussian processes. Experimental results on various datasets validate that MIPLGP is superior to well-established multi-instance learning and partial-label learning algorithms for solving MIPL problems. Our code and datasets will be made publicly available.
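The MIPL setting can be summarized with a small data-structure sketch (an illustration only, not the paper's code): each training sample is a bag of instances paired with a candidate label set that contains the single ground-truth label plus some false positives.

```python
# Hedged sketch: the multi-instance partial-label (MIPL) training sample.
from dataclasses import dataclass
import numpy as np

@dataclass
class MIPLBag:
    instances: np.ndarray        # (n_instances, n_features) feature matrix of the bag
    candidate_labels: set[int]   # contains the ground truth plus some false positives

def is_well_formed(bag: MIPLBag, n_classes: int) -> bool:
    """Basic sanity checks implied by the problem setting."""
    return (bag.instances.ndim == 2
            and len(bag.candidate_labels) >= 1
            and bag.candidate_labels <= set(range(n_classes)))

bag = MIPLBag(instances=np.random.randn(4, 16), candidate_labels={2, 5})
print(is_well_formed(bag, n_classes=10))
```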
Recently, the self-supervised pre-training paradigm has shown great potential in leveraging large-scale unlabeled data to improve downstream task performance. However, increasing the scale of unlabeled pre-training data in real-world scenarios requires prohibitive computational costs and faces the challenge of uncurated samples. To address these issues, we build a task-specific self-supervised pre-training framework from a data-selection perspective, based on a simple hypothesis: pre-training on unlabeled samples whose distribution is similar to the target task can bring substantial performance gains. Buttressed by this hypothesis, we propose a novel framework for Scalable and Efficient visual Pre-Training (SEPT) that introduces a retrieval pipeline for data selection. SEPT first leverages a self-supervised pre-trained model to extract features of the entire unlabeled dataset to initialize the retrieval pipeline. Then, for a specific target task, SEPT retrieves the most similar samples from the unlabeled dataset based on feature similarity for each target instance. Finally, SEPT pre-trains the target model on the selected unlabeled samples in a self-supervised manner before finetuning on the target data. By decoupling the scale of pre-training from the available upstream data for a target task, SEPT achieves high scalability of the upstream dataset and high efficiency of pre-training, resulting in high model architecture flexibility. Results on various downstream tasks demonstrate that SEPT can achieve competitive or even better performance compared with ImageNet pre-training while reducing the number of training samples by an order of magnitude, without resorting to any extra annotations.
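The data-selection step lends itself to a short sketch (an illustration under stated assumptions, not SEPT's released code): features of the unlabeled pool are extracted once, and for each target-task instance the most similar pool samples by cosine similarity are retrieved to form the task-specific pre-training set. The feature extractor, dimensions, and sizes below are placeholders.

```python
# Hedged sketch: retrieval-based selection of unlabeled samples for pre-training.
import numpy as np

def select_pretraining_set(pool_feats, target_feats, k=5):
    """Return indices into the unlabeled pool: top-k neighbors per target sample."""
    pool = pool_feats / np.linalg.norm(pool_feats, axis=1, keepdims=True)
    tgt = target_feats / np.linalg.norm(target_feats, axis=1, keepdims=True)
    sims = tgt @ pool.T                           # (n_target, n_pool) cosine similarities
    topk = np.argsort(-sims, axis=1)[:, :k]       # k most similar pool samples per target
    return np.unique(topk)                        # de-duplicated selection

pool_feats = np.random.randn(10_000, 128)         # stand-in features from a self-supervised model
target_feats = np.random.randn(100, 128)          # stand-in features of the target-task instances
selected = select_pretraining_set(pool_feats, target_feats, k=10)
print(selected.shape)
```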
Existing language models (LMs) predict tokens with a softmax over a finite vocabulary, which can make it difficult to predict rare tokens or phrases. We introduce NPM, the first nonparametric masked language model that replaces this softmax with a nonparametric distribution over every phrase in a reference corpus. We show that NPM can be efficiently trained with a contrastive objective and an in-batch approximation to full corpus retrieval. Zero-shot evaluation on 9 closed-set tasks and 7 open-set tasks demonstrates that NPM outperforms significantly larger parametric models, either with or without a retrieve-and-generate approach. It is particularly strong at dealing with rare patterns (word senses or facts) and at predicting rare or nearly unseen words (e.g., non-Latin script). We release the model and code at github.com/facebookresearch/NPM.
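A minimal sketch of the nonparametric prediction rule described above (not the released NPM code): instead of a softmax over a fixed vocabulary, the masked position's query vector is compared against embeddings of every phrase in a reference corpus, and the nearest phrase is returned. The phrases and embeddings below are random stand-ins.

```python
# Hedged sketch: nonparametric "softmax" as nearest-neighbor retrieval over corpus phrases.
import numpy as np

corpus_phrases = ["machine learning", "Seattle", "non-Latin script", "language model"]
phrase_embs = np.random.randn(len(corpus_phrases), 64)       # stand-in phrase-encoder output
phrase_embs /= np.linalg.norm(phrase_embs, axis=1, keepdims=True)

def predict_mask(query_vec: np.ndarray) -> str:
    """Fill the masked position with the corpus phrase of highest similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    return corpus_phrases[int(np.argmax(phrase_embs @ q))]

print(predict_mask(np.random.randn(64)))
```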
Clustered federated learning (FL) has shown promising results by grouping clients into clusters. This is especially effective when separate groups of clients differ significantly in the distributions of their local data. Existing clustered FL algorithms essentially try to group clients so that those in the same cluster can leverage each other's data to perform federated learning more effectively. However, prior clustered FL algorithms attempt to learn these distribution similarities indirectly during training, which can be time-consuming, as many rounds of federated learning may be required before the cluster formation stabilizes. In this paper, we propose a new federated learning approach that directly and efficiently identifies distribution similarities among clients by analyzing the principal angles between the client data subspaces. Each client applies a truncated singular value decomposition (SVD) step on its local data in a one-shot manner to derive a small set of principal vectors, which provides a signature that succinctly captures the main characteristics of the underlying distribution. This small set of principal vectors is provided to the server so that it can directly identify distribution similarities among clients to form clusters. This is achieved by comparing the principal angles between the client data subspaces spanned by those principal vectors. The approach yields a simple yet effective clustered FL framework that addresses a broad range of data heterogeneity issues beyond simpler forms of non-IIDness such as label skew. Our clustered FL approach also enables convergence guarantees for non-convex objectives. Our code is available at https://github.com/mmorafah/pacfl.
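The signature-and-similarity step can be sketched as follows (a hedged illustration with my own choice of distance and clustering routine; see https://github.com/mmorafah/pacfl for the authors' implementation): each client keeps a few left-singular vectors of its local data matrix, and the server measures closeness of two clients via the principal angles between the subspaces those vectors span, then clusters on the resulting distance matrix.

```python
# Hedged sketch: client subspace signatures, principal angles, and server-side clustering.
import numpy as np
from scipy.linalg import subspace_angles
from scipy.cluster.hierarchy import fcluster, linkage

def client_signature(data: np.ndarray, p: int = 3) -> np.ndarray:
    """Top-p left singular vectors of the client's (n_samples, n_features) data."""
    u, _, _ = np.linalg.svd(data.T, full_matrices=False)   # columns span the feature subspace
    return u[:, :p]

def dissimilarity(sig_a: np.ndarray, sig_b: np.ndarray) -> float:
    """Smallest principal angle between the two subspaces (radians)."""
    return float(np.min(subspace_angles(sig_a, sig_b)))

# toy server-side clustering of 4 clients with random local data
clients = [np.random.randn(100, 20) for _ in range(4)]
sigs = [client_signature(c) for c in clients]
n = len(sigs)
dist = np.array([[dissimilarity(sigs[i], sigs[j]) for j in range(n)] for i in range(n)])
condensed = dist[np.triu_indices(n, k=1)]                   # condensed distance vector
labels = fcluster(linkage(condensed, method="average"), t=2, criterion="maxclust")
print(labels)
```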
We propose coordinating guiding vector fields to accomplish two tasks simultaneously with a team of robots: first, the guidance and navigation of multiple robots to possibly different paths or surfaces, which may be embedded in 2D or 3D; second, their motion coordination while tracking the prescribed paths or surfaces. Motion coordination is defined by desired parametric displacements between robots on the paths or surfaces. This desired displacement is achieved by controlling the virtual coordinates, which correspond to the path or surface parameters, among the guiding vector fields. Rigorous mathematical guarantees, underpinned by dynamical systems theory and Lyapunov theory, are provided for effective distributed motion coordination and navigation of the robots on the paths or surfaces from all initial positions. As an example of a practical robotic application, we derive a control algorithm from the proposed guiding vector fields for a Dubins-car-like model with actuation saturation. Our proposed algorithm is distributed and scalable to an arbitrary number of robots. Moreover, extensive illustrative simulations and outdoor experiments with fixed-wing aircraft validate the effectiveness and robustness of our algorithm.
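To make the guiding-vector-field idea concrete, here is a small sketch for a single robot and a circular path phi(x, y) = x^2 + y^2 - R^2 = 0, under stated assumptions rather than the paper's controller: the field combines a tangential term that propagates along the path with a corrective term -k * phi * grad(phi) that pushes the robot back onto it. The coordination in the paper additionally couples an extra virtual coordinate per robot, which is omitted here.

```python
# Hedged sketch: a guiding vector field for a single robot and a circular path.
import numpy as np

R, k = 1.0, 1.0          # path radius and convergence gain (assumed values)

def guiding_field(p: np.ndarray) -> np.ndarray:
    x, y = p
    phi = x**2 + y**2 - R**2                   # level-set description of the path
    grad = np.array([2 * x, 2 * y])            # gradient of phi
    tangent = np.array([-grad[1], grad[0]])    # 90-degree rotation: along-path term
    return tangent - k * phi * grad            # propagate along the path + converge to it

# simple Euler integration of the kinematics p_dot = field(p)
p = np.array([2.0, 0.0])
for _ in range(2000):
    p = p + 0.005 * guiding_field(p)
print(np.round(p, 3), "distance to path:", round(abs(np.linalg.norm(p) - R), 4))
```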